Learning General Audio Representations with Large-Scale Training of Patchout Audio Transformers
The success of supervised deep learning methods is largely due to their
ability to learn relevant features from raw data. Deep Neural Networks (DNNs)
trained on large-scale datasets are capable of capturing a diverse set of
features, and learning a representation that generalizes to unseen tasks
and datasets from the same domain. Hence, these models can be used as
powerful feature extractors, in combination with shallower models as
classifiers, for smaller tasks and datasets where the amount of training data
is insufficient for learning an end-to-end model from scratch. During the past
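The frozen-extractor setup described above can be sketched as follows. This is a minimal, hypothetical illustration: the "pretrained model" is replaced by a fixed random projection, and the shallow classifier is a nearest-centroid rule, so none of the names below correspond to the actual PaSST code or the HEAR API.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_embedding(audio, projection):
    """Stand-in for a frozen pretrained extractor: a fixed nonlinear
    projection of raw audio into an embedding space."""
    return np.tanh(audio @ projection)

# Synthetic "dataset": short clips of a small downstream task where
# training an end-to-end model from scratch would not be feasible.
n_clips, clip_len, emb_dim = 40, 100, 16
projection = rng.normal(size=(clip_len, emb_dim))  # frozen "weights"
labels = rng.integers(0, 2, size=n_clips)
# Two classes with different offsets so the toy task is separable.
clips = rng.normal(size=(n_clips, clip_len)) + labels[:, None] * 2.0

# 1) Extract embeddings once with the frozen model.
embeddings = extract_embedding(clips, projection)

# 2) Fit only a shallow classifier on top: nearest class centroid.
centroids = np.stack([embeddings[labels == c].mean(axis=0) for c in (0, 1)])
dists = np.linalg.norm(embeddings[:, None, :] - centroids[None], axis=-1)
preds = dists.argmin(axis=1)
accuracy = (preds == labels).mean()
print(f"train accuracy: {accuracy:.2f}")
```

Only the centroids are "trained" here; the extractor stays fixed, which is exactly why this regime works with little downstream data.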
years, Convolutional Neural Networks (CNNs) have largely been the method of
choice for audio processing. However, recently attention-based transformer
models have demonstrated great potential in supervised settings, outperforming
CNNs. In this work, we investigate the use of audio transformers trained on
large-scale datasets to learn general-purpose representations. We study how the
different setups in these audio transformers affect the quality of their
embeddings. We experiment with the models' time resolution, extracted embedding
level, and receptive fields in order to see how they affect performance on a
variety of tasks and datasets, following the HEAR 2021 NeurIPS challenge
evaluation setup. Our results show that representations extracted by audio
transformers outperform CNN representations. Furthermore, we show that
transformers trained on Audioset can be extremely effective representation
extractors for a wide range of downstream tasks.

Comment: will appear in HEAR: Holistic Evaluation of Audio Representations,
Proceedings of Machine Learning Research (PMLR) 166. Source code:
https://github.com/kkoutini/passt_hear2